
Conversation


@sca075 sca075 commented Dec 20, 2025

Expose FloorData and TrimsData types in library init.py

Summary by CodeRabbit

  • New Features

    • Added comprehensive color management system with support for user-defined and room-specific colors.
    • Introduced advanced image drawing utilities for map visualization including shapes, text rendering, and color blending.
    • Added map data processing and parsing utilities for multiple vacuum formats.
    • Enhanced map image handlers with improved room detection and calibration support.
  • Removed Features

    • Removed OptimizedElementMapGenerator module.
  • Chores

    • Updated package version to 0.1.14b0.
    • Exposed FloorData and TrimsData as public types.
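
As a quick illustration, downstream consumers such as MQTT Vacuum Camera can now import these types from the package root; a minimal sketch (the apply_floor_trims function is hypothetical):

from valetudo_map_parser import FloorData, TrimsData

# Both names are now part of the public API (listed in __all__), so
# multi-floor consumers can annotate against them without importing
# from the internal config.types module.
def apply_floor_trims(floor: FloorData, trims: TrimsData) -> None:
    """Hypothetical consumer hook for multi-floor support."""
    ...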


Signed-off-by: Sandro Cantarella <sandro@Sandros-Mac-mini.fritz.box>
Signed-off-by: Sandro Cantarella <sandro@79f3d049-9006-400d-9954-2e5dcd250fa9.fritz.box>
Signed-off-by: SCA075 <82227818+sca075@users.noreply.github.com>
- Add FloorData and TrimsData to imports from config.types
- Add both types to __all__ list for public API
- Enables MQTT Vacuum Camera to import these types for multi-floor support
Signed-off-by: SCA075 <82227818+sca075@users.noreply.github.com>
@sca075 sca075 self-assigned this Dec 20, 2025

coderabbitai bot commented Dec 20, 2025

Walkthrough

This pull request introduces substantial refactoring and feature expansion: removes the OptimizedElementMapGenerator module, adds comprehensive color management (ColorsManagement, DefaultColors, SupportedColor), introduces multiple image-handling and room-extraction modules for both Hypfer and Rand256 formats, removes enable_snapshots initialization, exposes FloorData and TrimsData types, establishes a new pytest test suite with fixtures and integration tests, and adds numerous test analysis and profiling utilities.

Changes

  • Version and Package Exports: pyproject.toml, SCR/valetudo_map_parser/__init__.py
    Bumps version from 0.1.13 to 0.1.14b0. Exposes FloorData and TrimsData in the public API.
  • Core Module Removals: SCR/valetudo_map_parser/config/optimized_element_map.py
    Deletes the entire OptimizedElementMapGenerator class and all associated methods for downscaled element map generation.
  • Core Module Updates: SCR/valetudo_map_parser/config/shared.py, SCR/valetudo_map_parser/rand256_handler.py
    Removes enable_snapshots initialization in the shared data update. rand256_handler.py receives a trailing newline (no-op).
  • New Color Management System: backups/colors.py
    Introduces the SupportedColor enum, the DefaultColors container with RGB/RGBA mappings, and the ColorsManagement class with alpha handling, color blending, batch operations, caching, and room-specific color logic.
  • New Drawing and Image Utilities: backups/drawable.ori, backups/drawable_ori.py
    Parallel implementations of the Drawable class providing image creation, layer rendering, primitive drawing (lines, circles, polygons), complex shapes (walls, zones, robot), text rendering, batch operations, and color blending integration.
  • New Hypfer Handler Modules: backups/hypfer_handler_ori.py, backups/hypfer_rooms_handler.py
    Introduces HypferMapImageHandler for map JSON to image conversion with room extraction, calibration, and element control; HypferRoomsHandler for contour extraction, room property parsing, and spatial queries via Moore-Neighbor tracing and Ramer-Douglas-Peucker simplification.
  • New Rand256/Rand25 Handler Modules: backups/map_data_ori.py, backups/rand25_handler_rooms.py, backups/refactored_old_code.py, backups/test_old_pars.py
    Introduces ImageData and RandImageData classes for map data extraction and conversion; ReImageHandler for Rand256 image generation; RRMapParser for Rand256 binary map parsing; and room property extraction and Rand-to-Valetudo format conversion utilities.
  • New Example Script: examples/async_get_pil_image_example.py
    Demonstrates usage of async_get_pil_image with HypferMapImageHandler and ReImageHandler, shared data management, and example JSON payloads.
  • Renovate Configuration: .github/workflows/renovate.json
    Adds a Renovate bot configuration with dry-run mode, bot user assignment, GitHub platform targeting, and a package rule for lock-file maintenance.
  • Pytest Test Suite Core: new_tests/__init__.py, new_tests/conftest.py, new_tests/pytest.ini
    Establishes pytest infrastructure with extensive fixtures for test data loading, device info, shared resources, sample data generators, async support, and singleton cleanup.
  • Config Module Tests: new_tests/config/__init__.py, new_tests/config/test_colors.py, new_tests/config/test_drawable.py, new_tests/config/test_shared.py, new_tests/config/test_status_text.py, new_tests/config/test_types.py
    Comprehensive unit tests for ColorsManagement, DefaultColors, SupportedColor; DrawableElement and DrawingConfig; CameraShared singleton behavior; StatusText translations and rendering; TrimCropData, TrimsData, FloorData serialization and stores.
  • Integration Tests: new_tests/integration/__init__.py, new_tests/integration/test_basic_integration.py
    End-to-end tests for Hypfer and Rand256 image generation, calibration points, room detection, and multi-vacuum instance isolation.
  • Map Data Tests: new_tests/test_map_data.py
    Tests for ImageData, RandImageData, and HyperMapData, including layer finding, coordinate extraction, and image size retrieval.
  • Test Documentation: new_tests/README.md, new_tests/IMPLEMENTATION_SUMMARY.md, new_tests/TEST_RESULTS_SUMMARY.md, new_tests/FIXES_APPLIED.md
    Comprehensive documentation of test suite structure, fixtures, running instructions, test results, identified issues, and applied fixes.
  • Test Utilities and Analysis Scripts: tests/rooms_test.py, tests/rand_rooms_test.py, tests/analyze_room*.py, tests/benchmark_margins.py, tests/compare_payloads.py, tests/convert_rand_to_hypfer.py, tests/debug_binary.py, tests/debug_manage_rooms.py, tests/profiling_requirements.txt, tests/PROFILING_README.md
    New analysis, debugging, and profiling utilities for room extraction, payload comparison, format conversion, binary inspection, and performance benchmarking, with detailed documentation.
  • Test Data and Comparison: tests/glossyhardtofindnarwhal.json, tests/test_floor_data.py, tests/test_hypfer_profiling.py, tests/test_mvcrender.py, tests/test_parser.py, tests/test_parser_comparison.py, tests/test_rand_to_hypfer_compression.py, tests/test_room_store.py, tests/test_status_text_performance.py, tests/RAND_TO_HYPFER_COMPRESSION_RESULTS.md, tests/VALETUDO_MAP_PARSER_TYPES_USAGE_REPORT.md
    New test data files, performance/profiling tests for Hypfer handlers and mvcrender, parser comparison and validation tests, room store singleton tests, compression analysis, and a comprehensive type usage report.
  • Additional Test Scripts: tests/tests/__init__.py (implicit), tests/tests/test_all_bins.py, tests/tests/test_robot_angles.py, tests/tests/comparison.txt
    Multi-parser testing framework, robot angle calculation comparison and tuning, and test execution logs.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50–75 minutes

Specific areas requiring attention:

  • Color management system (backups/colors.py): Dense logic with alpha handling, batch operations, caching, and fallback strategies across multiple color blending pathways.
  • Handler implementations (backups/hypfer_handler_ori.py, backups/rand25_handler_rooms.py): Substantial multi-pass image generation pipelines with room extraction, calibration, and conditional rendering; room contour extraction via Moore-Neighbor tracing and Ramer-Douglas-Peucker simplification (a generic sketch of the RDP step follows this list).
  • Parser implementations (backups/test_old_pars.py, backups/map_data_ori.py): Binary block parsing, coordinate transformation, and format conversion logic with multiple branching paths for Rand256 and Valetudo structures.
  • Drawable and image utilities (backups/drawable_ori.py): Extensive image manipulation with primitive drawing, compositing, batch operations, rotation handling, and color blending integration.
  • Module deletions and shared.py changes: Verify that OptimizedElementMapGenerator removal does not break existing code paths and that enable_snapshots removal is intentional.
  • Test suite scope: Large, heterogeneous test additions spanning config, integration, and utility layers; verify fixture design, test isolation, and proper cleanup.
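
For reviewers unfamiliar with the simplification step named above, Ramer-Douglas-Peucker recursively keeps only the points that deviate from the chord between a segment's endpoints by more than a tolerance. A generic, self-contained sketch of the textbook algorithm (not the handler's actual code, which lives in backups/hypfer_rooms_handler.py and should be reviewed on its own terms):

import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification (textbook form)."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    chord_len = math.hypot(dx, dy) or 1.0  # guard against coincident endpoints
    # Find the interior point farthest from the chord between the endpoints
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        dist = abs(dy * (px - x1) - dx * (py - y1)) / chord_len
        if dist > dmax:
            dmax, index = dist, i
    if dmax > epsilon:
        # Keep the far point and simplify both halves recursively
        left = rdp(points[: index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right
    # Everything within tolerance: collapse to the two endpoints
    return [points[0], points[-1]]

# A noisy, nearly straight outline collapses to its endpoints:
print(rdp([(0, 0), (1, 0.1), (2, -0.1), (3, 0)], epsilon=0.5))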


Suggested labels

enhancement, refactoring, testing

Poem

🐰 Colors now blend in harmony,
Rooms extracted by geometry,
Maps rendered with flair,
Tests everywhere!
This parser grows splendidly! 🎨✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The PR title accurately summarizes the primary change: exposing FloorData and TrimsData types in the library's __init__.py file.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 90.33%, which meets the required 80.00% threshold.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 17

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

🟡 Minor comments (19)
pyproject.toml-3-3 (1)

3-3: Version mismatch between pyproject.toml and __init__.py.

The version here is 0.1.14b0 (beta), but SCR/valetudo_map_parser/__init__.py declares Version: 0.1.14 (without the beta suffix). Consider aligning both version declarations to avoid confusion.
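
One common way to prevent the two declarations from drifting is to single-source the version from package metadata; a minimal sketch, assuming the distribution name is valetudo-map-parser (verify against pyproject.toml before adopting):

# SCR/valetudo_map_parser/__init__.py (sketch)
from importlib.metadata import PackageNotFoundError, version

try:
    # Read the version from the installed package metadata
    __version__ = version("valetudo-map-parser")
except PackageNotFoundError:
    # Fallback for uninstalled source checkouts
    __version__ = "0.0.0.dev0"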

tests/VALETUDO_MAP_PARSER_TYPES_USAGE_REPORT.md-2-2 (1)

2-2: Incorrect generation date.

The document states it was generated on 2025-10-18, but the PR was created on 2025-12-20. Update the date to reflect when this was actually generated.

🔎 Proposed fix
 # Valetudo Map Parser Types Usage Report
-**Generated:** 2025-10-18  
+**Generated:** 2025-12-20  
 **Purpose:** Comprehensive analysis of all valetudo_map_parser types, classes, and constants currently in use
tests/convert_rand_to_hypfer.py-309-312 (1)

309-312: Potential division by zero.

If compressed_pixels is 0, lines 311-312 will raise a ZeroDivisionError.

🔎 Proposed fix
     print(f"Original pixel data: {original_pixels} values")
     print(f"Compressed pixel data: {compressed_pixels} values")
-    print(f"Compression ratio: {original_pixels / compressed_pixels:.2f}x")
-    print(f"Memory reduction: {(1 - compressed_pixels/original_pixels) * 100:.1f}%")
+    if compressed_pixels > 0 and original_pixels > 0:
+        print(f"Compression ratio: {original_pixels / compressed_pixels:.2f}x")
+        print(f"Memory reduction: {(1 - compressed_pixels/original_pixels) * 100:.1f}%")
+    else:
+        print("Unable to calculate compression ratio (no pixel data)")
backups/rand25_handler_rooms.py-180-180 (1)

180-180: Invalid return type annotation syntax.

PilPNG or None is not a valid type annotation: the expression evaluates to just PilPNG, so the None case is silently dropped. Use Optional[PilPNG] or PilPNG | None (Python 3.10+).

🔎 Proposed fix
+from typing import Any, Optional
+
 async def get_image_from_rrm(
     self,
     m_json: JsonType,  # json data
     destinations: None = None,  # MQTT destinations for labels
-) -> PilPNG or None:
+) -> Optional[PilPNG]:

Committable suggestion skipped: line range outside the PR's diff.

backups/colors.py-456-456 (1)

456-456: Return type mismatch: Returns list instead of tuple.

The method signature and docstring indicate returning a Color (tuple), but line 456 returns a list [out_r, out_g, out_b, out_a]. This inconsistency may cause issues if downstream code expects tuple operations (e.g., unpacking, hashing).

🔎 Suggested fix
-        return [out_r, out_g, out_b, out_a]
+        return (out_r, out_g, out_b, out_a)
backups/drawable_ori.py-34-40 (1)

34-40: Minor: Comment does not match the constant value.

The comment on line 34 says "Red color for error messages" but ERROR_OUTLINE is set to black (0, 0, 0, 255). Similarly, line 40's comment says "Red color with lower opacity for error outlines" but the constant name is ERROR_COLOR. The naming and comments are swapped.

🔎 Suggested fix
-    ERROR_OUTLINE: Color = (0, 0, 0, 255)  # Red color for error messages
-    ERROR_COLOR: Color = (
-        255,
-        0,
-        0,
-        191,
-    )  # Red color with lower opacity for error outlines
+    ERROR_OUTLINE: Color = (0, 0, 0, 255)  # Black color for error outlines
+    ERROR_COLOR: Color = (
+        255,
+        0,
+        0,
+        191,
+    )  # Red color with lower opacity for error fill
tests/compare_payloads.py-12-12 (1)

12-12: Fix import module name - typo in module reference.

Import references rand25_parser but the actual module is rand256_parser. This will cause an ImportError at runtime.

-from valetudo_map_parser.config.rand25_parser import RRMapParser
+from valetudo_map_parser.config.rand256_parser import RRMapParser
tests/test_mvcrender.py-71-74 (1)

71-74: Missing f-string prefix causes incorrect output.

Line 73 is missing the f prefix, so it will print the literal string {ITERATIONS} instead of the value 100.

🔎 Proposed fix
 # Test 1: AutoCrop with rotation (most complex operation)
 print(f"\n{'=' * 70}")
-print("Test 1: AutoCrop with rotation ({ITERATIONS} iterations)")
+print(f"Test 1: AutoCrop with rotation ({ITERATIONS} iterations)")
 print(f"{'=' * 70}")
tests/rooms_test.py-119-125 (1)

119-125: os.makedirs with empty string path.

os.path.dirname("test.json") returns an empty string since the file has no directory component, and os.makedirs("", exist_ok=True) raises FileNotFoundError. This logic should be removed or made to handle the edge case.

🔎 Proposed fix
-        os.makedirs(os.path.dirname(test_data_path), exist_ok=True)
+        dir_name = os.path.dirname(test_data_path)
+        if dir_name:  # Only create directory if there's a directory component
+            os.makedirs(dir_name, exist_ok=True)
         with open(test_data_path, "w", encoding="utf-8") as file:
             json.dump(sample_data, file, indent=2)
new_tests/integration/test_basic_integration.py-52-65 (1)

52-65: Missing image cleanup in test.

Similar to the calibration points test, the image from line 59 should be closed.

🔎 Proposed fix
     @pytest.mark.asyncio
     async def test_hypfer_room_detection(self, hypfer_json_data, vacuum_id, device_info):
         """Test that rooms are detected from JSON."""
         manager = CameraSharedManager(vacuum_id, device_info)
         shared = manager.get_instance()
         handler = HypferMapImageHandler(shared)

-        await handler.async_get_image(hypfer_json_data)
+        image, _ = await handler.async_get_image(hypfer_json_data)

         # Check if rooms were detected
         room_store = RoomStore(vacuum_id)
         rooms = room_store.get_rooms()
         # Should have detected some rooms (depends on test data)
         assert isinstance(rooms, dict)
+
+        # Clean up
+        if image:
+            image.close()
new_tests/integration/test_basic_integration.py-39-50 (1)

39-50: Missing image cleanup in test.

The image generated on line 46 is not closed, which could lead to resource leaks during test runs.

🔎 Proposed fix
     @pytest.mark.asyncio
     async def test_hypfer_calibration_points(self, hypfer_json_data, vacuum_id, device_info):
         """Test that calibration points are generated."""
         manager = CameraSharedManager(vacuum_id, device_info)
         shared = manager.get_instance()
         handler = HypferMapImageHandler(shared)

         image, metadata = await handler.async_get_image(hypfer_json_data)

         # Check calibration points were set (may be None if image generation had errors)
         # This is acceptable as the library may have issues with certain data
         assert shared.attr_calibration_points is None or isinstance(shared.attr_calibration_points, list)
+
+        # Clean up
+        if image:
+            image.close()
tests/test_parser_comparison.py-221-230 (1)

221-230: Inconsistent pixel flag usage between test calls.

Line 224 calls compare_parsers which internally sets pixels=True, but the comment on line 223 says "basic parsing (pixels=False)". Then line 229 calls test_with_pixels which also uses pixels=True. Both tests are effectively the same.

🔎 Proposed fix

Update compare_parsers to use pixels=False for basic parsing:

 def compare_parsers(payload_file: str, runs: int = 5):
     ...
     # Profile both parsers
     rand25_stats = profile_parser(
-        Rand25Parser(), "RAND25", payload, pixels=True, runs=runs
+        Rand25Parser(), "RAND25", payload, pixels=False, runs=runs
     )
     rand256_stats = profile_parser(
-        Rand256Parser(), "RAND256", payload, pixels=True, runs=runs
+        Rand256Parser(), "RAND256", payload, pixels=False, runs=runs
     )

Committable suggestion skipped: line range outside the PR's diff.

new_tests/integration/test_basic_integration.py-94-106 (1)

94-106: Missing image cleanup in Rand256 calibration test.

Same issue as Hypfer tests - the image should be closed after use.

🔎 Proposed fix
         # Check calibration points were set (may be None if image generation had errors)
         # This is acceptable as the library may have issues with certain data
         assert shared.attr_calibration_points is None or isinstance(shared.attr_calibration_points, list)
+
+        # Clean up
+        if image:
+            image.close()
backups/hypfer_rooms_handler.py-21-22 (1)

21-22: Same typo in class docstring.

🔎 Suggested fix
-    Handler for extracting and managing room data from Hipfer vacuum maps.
+    Handler for extracting and managing room data from Hypfer vacuum maps.
backups/hypfer_rooms_handler.py-31-40 (1)

31-40: Typo in method docstring: "HipferRoomsHandler" should be "HypferRoomsHandler".

🔎 Suggested fix
-        Initialize the HipferRoomsHandler.
+        Initialize the HypferRoomsHandler.
backups/hypfer_rooms_handler.py-2-6 (1)

2-6: Typo: "Hipfer" should be "Hypfer"

The module docstring and class docstrings consistently misspell "Hypfer" as "Hipfer".

🔎 Suggested fix
 """
-Hipfer Rooms Handler Module.
-Handles room data extraction and processing for Valetudo Hipfer vacuum maps.
+Hypfer Rooms Handler Module.
+Handles room data extraction and processing for Valetudo Hypfer vacuum maps.
 Provides async methods for room outline extraction and properties management.
 Version: 0.1.9
 """
backups/hypfer_handler_ori.py-97-109 (1)

97-109: Type hint uses json module as type annotation.

Line 100 uses m_json: json | None which refers to the json module, not a JSON data type. This should be dict | None or a more specific type like Dict[str, Any].

🔎 Suggested fix
     async def async_get_image_from_json(
         self,
-        m_json: json | None,
+        m_json: dict | None,
         return_webp: bool = False,
     ) -> WebPBytes | Image.Image | None:
backups/drawable.ori-34-40 (1)

34-40: Misleading comment: both colors are red-based.

The comment for ERROR_OUTLINE says "Red color" but the value is (0, 0, 0, 255) which is black, not red.

🔎 Suggested fix
-    ERROR_OUTLINE: Color = (0, 0, 0, 255)  # Red color for error messages
+    ERROR_OUTLINE: Color = (0, 0, 0, 255)  # Black outline for error states
     ERROR_COLOR: Color = (
         255,
         0,
         0,
         191,
-    )  # Red color with lower opacity for error outlines
+    )  # Red fill color with transparency for error states
backups/map_data_ori.py-451-459 (1)

451-459: Invalid return type annotation syntax.

tuple or list is not a valid type annotation: the expression evaluates to just tuple. Use Union[tuple, list] or the proper return type (the fix below also needs from typing import Tuple, Union added to the module imports).

🔎 Suggested fix
     @staticmethod
     async def async_get_rrm_segments(
         json_data: JsonType,
         size_x: int,
         size_y: int,
         pos_top: int,
         pos_left: int,
         out_lines: bool = False,
-    ) -> tuple or list:
+    ) -> Union[Tuple[list, list], list]:

Committable suggestion skipped: line range outside the PR's diff.

🧹 Nitpick comments (54)
new_tests/config/__init__.py (1)

1-1: Minor typo: double period in docstring.

The docstring has an extra period at the end ("module..").

Suggested fix
-"""Tests for config module."""
+"""Tests for config module."""

Note: If the double period was intentional, disregard this comment.

tests/debug_manage_rooms.py (1)

13-16: Bare-except concern does not apply; the code already catches Exception.

A bare except: clause would catch all exceptions, including SystemExit and KeyboardInterrupt, which may interfere with debugging and interrupt handling. The code here already uses except Exception:, which lets those exceptions propagate, so no changes are needed.

backups/refactored_old_code.py (1)

1-44: Consider removing commented-out code and rely on version control.

This entire file contains only commented-out code. Version control systems like Git are designed to preserve code history, making it unnecessary to keep large blocks of commented code in the repository.

If this code needs to be preserved for reference:

  • Consider documenting the approach in a markdown file with a reference to the relevant commit
  • Or simply rely on Git history to access the old implementation
new_tests/README.md (1)

7-21: Consider adding language specifier to fenced code block.

The directory structure visualization uses a fenced code block without a language identifier. While not critical, adding a language specifier improves rendering and syntax highlighting.

🔎 Suggested improvement
-```
+```text
 new_tests/
 ├── conftest.py                          # Pytest fixtures and configuration
 ├── config/                              # Tests for config module

Based on static analysis hints.

tests/analyze_room12.py (2)

79-79: Move scipy import to module level.

The scipy import is placed inside the function, which is unconventional. Move it to the top of the file with other imports for better clarity and to catch import errors early.

🔎 Proposed fix
 import numpy as np
+from scipy import ndimage

(and, later in the function:)

         _LOGGER.info(
             f"Scaled bounding box: X: {x_min * pixel_size}-{x_max * pixel_size}, Y: {y_min * pixel_size}-{y_max * pixel_size}"
         )

         # Check if there's a small isolated region
         # Count connected components
-        from scipy import ndimage
-
         labeled_array, num_features = ndimage.label(mask)

101-112: Hardcoded coordinates suggest incomplete implementation.

The script uses hardcoded coordinates (3345-3350, 2535-2540) to identify a "problematic component." This suggests the script is a work-in-progress diagnostic tool rather than a general-purpose analyzer.

Consider parameterizing these coordinates or removing this section if it was intended for one-time debugging.

tests/debug_binary.py (1)

164-164: Hardcoded filename limits reusability.

The script hardcodes map_data_20250728_194519.bin. Consider accepting the filename as a command-line argument for better flexibility.

🔎 Proposed enhancement
 def main():
     """Main function."""
-    payload_file = "map_data_20250728_194519.bin"
+    import sys
+    payload_file = sys.argv[1] if len(sys.argv) > 1 else "map_data_20250728_194519.bin"
 
     if not os.path.exists(payload_file):
-        print(f"Payload file {payload_file} not found!")
+        print(f"Usage: {sys.argv[0]} [payload_file]")
+        print(f"Payload file not found: {payload_file}")
         return
tests/analyze_segment_walls.py (1)

99-134: O(n*m) complexity may cause performance issues.

The find_adjacent_pixels function uses nested loops over all segment and wall coordinates, resulting in O(n*m) complexity. For large maps, this could be very slow.

Consider using spatial data structures (e.g., KD-tree or grid-based lookup) for better performance:

def find_adjacent_pixels(segment_pixels, wall_pixels):
    """Find segment/wall pixel pairs that touch, using a set-based lookup."""
    # Expand wall runs into a set of coordinates for O(1) membership tests
    wall_grid = set()
    for x, y, length in wall_pixels:
        for i in range(length):
            wall_grid.add((x + i, y))

    adjacent_pairs = []
    for x, y, length in segment_pixels:
        for i in range(length):
            sx, sy = x + i, y
            # Check the 8 surrounding positions (skipping the pixel itself)
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx == 0 and dy == 0:
                        continue
                    if (sx + dx, sy + dy) in wall_grid:
                        adjacent_pairs.append(((sx, sy), (sx + dx, sy + dy)))

    return adjacent_pairs
tests/analyze_room_connections.py (2)

104-150: Brute-force distance calculation is inefficient.

The minimum distance calculation uses nested loops over all pixel pairs, resulting in O(n*m) complexity. For rooms with thousands of pixels, this will be extremely slow.

Consider using scipy's distance_transform_edt or cdist for much better performance:

from scipy.spatial.distance import cdist

# Calculate minimum distance between Room 2 and Room 7
if len(room2_indices[0]) > 0 and len(room7_indices[0]) > 0:
    room2_coords = np.column_stack((room2_indices[1], room2_indices[0]))
    room7_coords = np.column_stack((room7_indices[1], room7_indices[0]))
    
    # Compute pairwise distances
    distances = cdist(room2_coords, room7_coords, metric='euclidean')
    
    # Find minimum
    min_idx = np.unravel_index(distances.argmin(), distances.shape)
    min_distance = distances[min_idx]
    closest_point_room2 = tuple(room2_coords[min_idx[0]])
    closest_point_room7 = tuple(room7_coords[min_idx[1]])

45-79: Duplicated segment processing logic.

The segment processing code appears twice (lines 45-79 and lines 160-197) with nearly identical logic. Extract this into a reusable function to follow DRY principles.

🔎 Proposed refactoring
def process_segment(layer, height, width):
    """Process a single segment layer into a mask."""
    segment_id = layer.get("metaData", {}).get("segmentId")
    name = layer.get("metaData", {}).get("name", f"Room {segment_id}")
    
    compressed_pixels = layer.get("compressedPixels", [])
    pixels = [
        compressed_pixels[i : i + 3]
        for i in range(0, len(compressed_pixels), 3)
    ]
    
    mask = np.zeros((height, width), dtype=np.uint8)
    for pixel_run in pixels:
        x, y, length = pixel_run
        if 0 <= y < height and 0 <= x < width and x + length <= width:
            mask[y, x : x + length] = 1
    
    return segment_id, name, mask

# Use in both places
for layer in data.get("layers", []):
    if layer.get("__class") == "MapLayer" and layer.get("type") == "segment":
        segment_id, name, room_mask = process_segment(layer, height, width)
        # ... rest of logic

Also applies to: 160-197

tests/tests/test_all_bins.py (2)

189-190: Searching parent directory for test files is fragile.

The script searches for .bin files in the parent directory (..), which makes the test dependent on directory structure and could find unintended files.

Consider:

  1. Using a dedicated test data directory
  2. Accepting test file paths as command-line arguments
  3. Using a configuration file to specify test files
# Example: Use dedicated test directory
test_dir = os.path.join(os.path.dirname(__file__), "test_data")
bin_files = [f for f in os.listdir(test_dir) if f.endswith(".bin")]

62-75: Handle JSON parsing for consistency.

The special case for NEW_RAND256 returning a JSON string is handled well, but consider documenting why this parser returns JSON while others return dictionaries.

Add a comment explaining the different return types:

# For new parser, result is JSON string (for serialization optimization)
# Other parsers return dict directly
if parser_name == "NEW_RAND256" and isinstance(result, str):
tests/test_rand_to_hypfer_compression.py (1)

97-98: Hardcoded segment ID limits reusability.

The script hardcodes segment ID 20. Consider accepting the segment ID as a command-line argument to test different segments.

import sys

segment_id = int(sys.argv[1]) if len(sys.argv) > 1 else 20
print(f"Testing compression on segment {segment_id}")
tests/convert_rand_to_hypfer.py (3)

11-38: Unused variable prev_y.

The variable prev_y is assigned on line 33 but never used. Consider removing it to clean up the code.

🔎 Proposed fix
 def compress_pixels(pixel_indices, image_width, image_height, image_top=0, image_left=0):
     """Convert Rand256 pixel indices to Hypfer compressed format."""
     if not pixel_indices:
         return []
     
     compressed = []
-    prev_x = prev_y = None
+    prev_x = None
     run_start_x = run_y = None
     run_length = 0
     
     for idx in pixel_indices:
         x = (idx % image_width) + image_left
         y = ((image_height - 1) - (idx // image_width)) + image_top
         
         if run_start_x is None:
             run_start_x, run_y, run_length = x, y, 1
         elif y == run_y and x == prev_x + 1:
             run_length += 1
         else:
             compressed.extend([run_start_x, run_y, run_length])
             run_start_x, run_y, run_length = x, y, 1
         
-        prev_x, prev_y = x, y
+        prev_x = x
     
     if run_start_x is not None:
         compressed.extend([run_start_x, run_y, run_length])
     
     return compressed

41-71: Consider incremental min/max calculation for memory efficiency.

The current implementation expands all pixels into lists just to calculate min/max, which defeats the memory benefit of compression for large datasets. Consider calculating these incrementally.

🔎 Proposed memory-efficient implementation
 def calculate_dimensions(compressed_pixels):
     """Calculate min/max/mid/avg dimensions from compressed pixels."""
     if not compressed_pixels:
         return None
     
-    x_coords = []
-    y_coords = []
-    pixel_count = 0
+    x_min = x_max = None
+    y_min = y_max = None
+    x_sum = y_sum = pixel_count = 0
     
     for i in range(0, len(compressed_pixels), 3):
         x, y, length = compressed_pixels[i], compressed_pixels[i+1], compressed_pixels[i+2]
-        for j in range(length):
-            x_coords.append(x + j)
-            y_coords.append(y)
-            pixel_count += 1
+        x_end = x + length - 1
+        
+        if x_min is None:
+            x_min, x_max = x, x_end
+            y_min = y_max = y
+        else:
+            x_min = min(x_min, x)
+            x_max = max(x_max, x_end)
+            y_min = min(y_min, y)
+            y_max = max(y_max, y)
+        
+        # For average calculation
+        for j in range(length):
+            x_sum += x + j
+            y_sum += y
+        pixel_count += length
     
     return {
         "x": {
-            "min": min(x_coords),
-            "max": max(x_coords),
-            "mid": (min(x_coords) + max(x_coords)) // 2,
-            "avg": sum(x_coords) // len(x_coords)
+            "min": x_min,
+            "max": x_max,
+            "mid": (x_min + x_max) // 2,
+            "avg": x_sum // pixel_count
         },
         "y": {
-            "min": min(y_coords),
-            "max": max(y_coords),
-            "mid": (min(y_coords) + max(y_coords)) // 2,
-            "avg": sum(y_coords) // len(y_coords)
+            "min": y_min,
+            "max": y_max,
+            "mid": (y_min + y_max) // 2,
+            "avg": y_sum // pixel_count
         },
         "pixelCount": pixel_count
     }

74-80: Add error handling for file operations.

The function opens the file without a try-except block. Consider adding error handling for FileNotFoundError and json.JSONDecodeError.

🔎 Proposed fix
 def convert_rand_to_hypfer(rand_json_path, output_path):
     """Convert complete Rand256 JSON to Hypfer format."""
     
-    # Load Rand256 JSON
-    with open(rand_json_path, 'r') as f:
-        rand_data = json.load(f)
+    # Load Rand256 JSON
+    try:
+        with open(rand_json_path, 'r') as f:
+            rand_data = json.load(f)
+    except FileNotFoundError:
+        raise FileNotFoundError(f"Input file not found: {rand_json_path}")
+    except json.JSONDecodeError as e:
+        raise ValueError(f"Invalid JSON in {rand_json_path}: {e}")
backups/rand25_handler_rooms.py (4)

196-206: Unused assignment immediately overwritten.

img_np_array returned from _setup_robot_and_image (line 197) is immediately overwritten by async_copy_array (line 206). The first assignment is discarded.

🔎 Proposed simplification
                 (
-                    img_np_array,
+                    _,
                     robot_position,
                     robot_position_angle,
                 ) = await self._setup_robot_and_image(
                     m_json, size_x, size_y, colors, destinations
                 )

                 # Increment frame number
                 self.frame_number += 1
                 img_np_array = await self.async_copy_array(self.img_base_layer)

263-267: Incomplete room element mapping implementation.

The comment indicates this is "a simplification" with a pass statement. The element map for rooms is not being populated, which may affect room-specific features.

Would you like help implementing the room pixel identification logic for the element map?


430-430: Unnecessary del statements.

Explicitly deleting local variables (del room, corners, robot_x, robot_y) is unnecessary in Python. These are automatically garbage collected when the function returns.

🔎 Proposed fix
                 _LOGGER.debug("%s is in %s", self.file_name, self.robot_in_room["room"])
-                del room, corners, robot_x, robot_y  # free memory.
                 return temp

470-476: Inconsistent element management approach.

enable_element directly calls self.drawing_config.enable_element(), while disable_element uses the manage_drawable_elements utility. Consider using consistent approaches for both methods.

🔎 Proposed fix for consistency
     def enable_element(self, element_code: DrawableElement) -> None:
         """Enable drawing of a specific element."""
-        self.drawing_config.enable_element(element_code)
+        manage_drawable_elements(self, "enable", element_code=element_code)
tests/test_floor_data.py (1)

6-27: Consider importing the actual types instead of duplicating.

The test file duplicates TrimsData and FloorData definitions. While this enables standalone execution, it risks testing behavior that differs from the actual implementation in SCR/valetudo_map_parser/config/types.py. The real TrimsData has additional methods (from_json, to_json, from_dict, clear) not present here.

Consider importing from the actual module and adding integration tests that verify the real implementation, or ensure this duplication is kept in sync:

# Option 1: Import actual types
from SCR.valetudo_map_parser.config.types import TrimsData, FloorData

# Option 2: If standalone is required, add a note about sync requirements
# WARNING: Keep in sync with SCR/valetudo_map_parser/config/types.py
tests/tests/test_robot_angles.py (2)

8-9: Path manipulation may be fragile.

The path insertion assumes a specific directory structure relative to the script. This could fail if the script is run from a different working directory. Consider using a more robust approach.

🔎 Proposed fix
 # Add the SCR directory to Python path
-sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "SCR")))
+# Navigate from tests/tests/ to repository root, then to SCR
+sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", "SCR")))

Note: The current path joins tests/tests/ with SCR, resulting in tests/tests/SCR which likely doesn't exist.


26-189: Script serves as documentation/analysis rather than automated tests.

This is an exploratory analysis script comparing angle calculation approaches, not a pytest-compatible test suite. The formatted output and recommend_solution() function that prints code suggestions make this clear.

Consider:

  1. Renaming to analyze_robot_angles.py to clarify its purpose
  2. Moving to a scripts/ or analysis/ directory
  3. If test coverage is needed, create separate pytest-compatible tests
backups/drawable_ori.py (1)

164-179: Dead code: pole_alpha blending condition will never trigger.

pole_alpha is hardcoded to 255 on line 164, so the condition if pole_alpha < 255: on line 176 will never be true. The blending code on lines 177-179 is unreachable.

🔎 Suggested fix
         # Get blended colors for flag and pole
         flag_alpha = flag_color[3] if len(flag_color) == 4 else 255
         pole_color_base = (0, 0, 255)  # Blue for the pole
-        pole_alpha = 255
 
         # Blend flag color if needed
         if flag_alpha < 255:
             flag_color = ColorsManagement.sample_and_blend_color(
                 layer, x, y, flag_color
             )
 
         # Create pole color with alpha
-        pole_color: Color = (*pole_color_base, pole_alpha)
-
-        # Blend pole color if needed
-        if pole_alpha < 255:
-            pole_color = ColorsManagement.sample_and_blend_color(
-                layer, x, y, pole_color
-            )
+        pole_color: Color = (*pole_color_base, 255)
backups/colors.py (3)

269-276: Unnecessary use of enumerate - index is unused.

The loop uses enumerate but discards the index with _. Simply iterate over the zipped values directly.

🔎 Suggested fix
             # Process RGB colors
             result = []
-            for _, (alpha, rgb) in enumerate(zip(alphas, rgb_colors)):
+            for alpha, rgb in zip(alphas, rgb_colors):
                 if rgb is None:
                     result.append((0, 0, 0, int(alpha)))
                 else:
                     result.append((rgb[0], rgb[1], rgb[2], int(alpha)))

489-512: Questionable optimization: ndimage.map_coordinates overhead for single-pixel sampling.

Using ndimage.map_coordinates for sampling a single pixel (4 separate calls for each channel) adds significant overhead compared to direct indexing. The comment claims this is more efficient for large arrays, but for single-point lookups, direct indexing array[y, x] is O(1) and much faster.

Consider removing the ndimage path for single-point sampling, or document specific benchmarks that justify this approach.
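
If benchmarks are added, a micro-benchmark along these lines would settle the question; this is an illustrative sketch (the array shape is an assumption, and timings are machine-dependent):

import timeit

import numpy as np
from scipy import ndimage

img = np.zeros((1000, 1000, 4), dtype=np.uint8)  # stand-in RGBA layer
y, x = 500, 500

# Direct O(1) indexing of a single pixel
direct = timeit.timeit(lambda: tuple(img[y, x]), number=10_000)

# Per-channel map_coordinates sampling of the same pixel (order=0 = nearest)
mapped = timeit.timeit(
    lambda: tuple(
        int(ndimage.map_coordinates(img[..., c], [[y], [x]], order=0)[0])
        for c in range(4)
    ),
    number=10_000,
)
print(f"direct indexing: {direct:.3f}s, map_coordinates: {mapped:.3f}s")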


822-824: Performance: list(SupportedColor).index() is O(n) per call.

Creating a list from the enum and calling .index() on every get_colour call is inefficient. Consider caching this mapping.

🔎 Suggested fix
# Add as class attribute in ColorsManagement.__init__:
self._color_index_map = {color: idx for idx, color in enumerate(SupportedColor) 
                         if not color.startswith(SupportedColor.COLOR_ROOM_PREFIX)}

# Then in get_colour:
try:
    index = self._color_index_map[supported_color]
    return self.user_colors[index]
except KeyError:
    ...
tests/PROFILING_README.md (1)

102-124: Add language specifiers to Python code blocks.

The Python code blocks at lines 102-111 and 114-124 should specify the language for better syntax highlighting and accessibility.

🔎 Proposed fix

Update the code fence opening at line 102:

-```
+```python

Update the code fence opening at line 114:

-```
+```python
new_tests/IMPLEMENTATION_SUMMARY.md (1)

33-54: Consider adding language specifier to directory tree.

The directory tree structure code block could specify a language (e.g., tree or text) for consistency with other code blocks in the documentation.

🔎 Proposed fix
-```
+```text
 new_tests/
new_tests/config/test_colors.py (2)

106-112: Consider strengthening the assertion for mismatched lengths.

The test only verifies that the result is a list. Based on the implementation context, it would be more valuable to assert the expected behavior - whether it returns a truncated list, pads with defaults, or raises an exception.

🔎 Suggested improvement
     def test_add_alpha_to_rgb_mismatched_lengths(self):
         """Test adding alpha to RGB colors with mismatched lengths."""
         alpha_channels = [255.0, 128.0]
         rgb_colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
         result = ColorsManagement.add_alpha_to_rgb(alpha_channels, rgb_colors)
         # Should handle mismatch gracefully
         assert isinstance(result, list)
+        # Verify the specific handling behavior (e.g., truncation or padding)
+        # Adjust based on actual implementation behavior
+        assert len(result) == min(len(alpha_channels), len(rgb_colors))

155-163: Cache emptiness assertion may be fragile.

The assertion that color_cache is empty at initialization (line 160) could break if the implementation pre-populates the cache. Consider documenting this as testing current behavior rather than a requirement, or verify this is intentional.

tests/test_room_store.py (2)

15-38: Non-standard import pattern using importlib.

This manual module loading approach is unusual for pytest tests. It manually constructs the module path and loads dependencies, which is fragile and harder to maintain.

Consider using standard pytest configuration with conftest.py to set up the Python path, or use pytest.ini/pyproject.toml to configure the test paths. This would allow standard imports like:

from valetudo_map_parser.config.types import RoomStore
🔎 Alternative using conftest.py

In tests/conftest.py:

import sys
from pathlib import Path

# Add SCR to path once at test collection time
sys.path.insert(0, str(Path(__file__).parent.parent / "SCR"))

Then in this file:

-import importlib.util
-import logging
-import sys
-from pathlib import Path
-
-# Add SCR/valetudo_map_parser to path so relative imports work
-valetudo_path = Path(__file__).parent.parent / "SCR" / "valetudo_map_parser"
-if str(valetudo_path.parent) not in sys.path:
-    sys.path.insert(0, str(valetudo_path.parent))
-
-# Load const module first
-const_path = valetudo_path / "const.py"
-const_spec = importlib.util.spec_from_file_location("valetudo_map_parser.const", const_path)
-const_module = importlib.util.module_from_spec(const_spec)
-sys.modules["valetudo_map_parser.const"] = const_module
-const_spec.loader.exec_module(const_module)
-
-# Now load types module
-types_path = valetudo_path / "config" / "types.py"
-spec = importlib.util.spec_from_file_location("valetudo_map_parser.config.types", types_path)
-types = importlib.util.module_from_spec(spec)
-sys.modules["valetudo_map_parser.config.types"] = types
-spec.loader.exec_module(types)
-
-RoomStore = types.RoomStore
+import logging
+from valetudo_map_parser.config.types import RoomStore

41-46: Excessive logging in test functions.

The tests use extensive _LOGGER.info() calls which will produce verbose output during test runs. While useful for debugging, this can clutter pytest output.

Consider using pytest's built-in -v flag for verbosity or capsys/caplog fixtures for capturing output, and reserve explicit logging for failure diagnostics.

Also applies to: 51-53, 134-136

tests/test_status_text_performance.py (2)

114-114: Remove redundant import.

sys is already imported at the module level (line 10). This duplicate import inside the function is unnecessary.

-    import sys
     func_list_size = sys.getsizeof(status_text.compose_functions)

34-184: Consider adding pytest assertions for CI integration.

This performance test uses print statements for reporting but lacks pytest assertions. For CI integration, consider adding minimum performance thresholds or using pytest-benchmark for standardized benchmarking.

At minimum, the overhead check (lines 170-173) could be converted to an assertion:

🔎 Optional: Add assertion for CI
     if abs(loop_time - inline_time) < 2:  # Within 2ms for 10k calls
         print("✅ Function loop overhead is NEGLIGIBLE!")
     else:
         print(f"⚠️  Function loop adds ~{overhead_per_call:.2f} μs per call")
+        # Consider failing CI on regressions:
+        # assert overhead_per_call < MAX_ACCEPTABLE_OVERHEAD_US
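
With pytest-benchmark installed, the same measurement becomes a standard fixture-driven test; a sketch where render_status is a hypothetical stand-in for whichever callable this file actually times:

# Requires: pip install pytest-benchmark
def render_status():
    """Stand-in for the status-text composition being measured."""
    return "Cleaning 75%"

def test_status_text_speed(benchmark):
    # The benchmark fixture runs the callable repeatedly and records
    # statistics; CI thresholds can be enforced via --benchmark-* options.
    result = benchmark(render_status)
    assert result == "Cleaning 75%"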
tests/compare_payloads.py (2)

95-105: Hardcoded test data paths may not exist.

The payload file paths are hardcoded and reference specific test data files. The function handles missing files gracefully (lines 22-24), but consider adding these files to .gitignore if they contain sensitive data, or documenting their expected location/format.


51-75: Consider documenting magic numbers.

The header offset 0x14 (line 52) and block type 8 (line 61) are magic numbers. For maintainability, consider adding comments or using named constants to document their meaning.

-    # Find robot position block
-    offset = 0x14  # Start after header
+    # Find robot position block
+    HEADER_SIZE = 0x14  # 20 bytes: standard RRM header size
+    ROBOT_POSITION_BLOCK_TYPE = 8
+    offset = HEADER_SIZE
     robot_block_data = None

     while offset < len(payload) - 8:
         try:
             type_ = struct.unpack("<H", payload[offset : offset + 2])[0]
             hlength = struct.unpack("<H", payload[offset + 2 : offset + 4])[0]
             length = struct.unpack("<I", payload[offset + 4 : offset + 8])[0]

-            if type_ == 8:  # Robot position block
+            if type_ == ROBOT_POSITION_BLOCK_TYPE:
tests/test_mvcrender.py (1)

10-22: Unnecessary semicolons and minor style issues.

Several lines in __init__ carry trailing semicolons, which are not idiomatic Python. While syntactically valid, they should be removed for consistency.

🔎 Proposed fix
     def __init__(self):
         self.trims = type("T", (),
                           {"to_dict": lambda self: {"trim_up": 0, "trim_down": 0, "trim_left": 0, "trim_right": 0}})()
-        self.offset_top = 0;
-        self.offset_down = 0;
-        self.offset_left = 0;
+        self.offset_top = 0
+        self.offset_down = 0
+        self.offset_left = 0
         self.offset_right = 0
-        self.vacuum_state = "cleaning";
+        self.vacuum_state = "cleaning"
         self.image_auto_zoom = True
-        self.image_ref_width = 0;
+        self.image_ref_width = 0
         self.image_ref_height = 0
tests/benchmark_margins.py (2)

39-49: Async functions contain no await statements.

Both original_image_margins and optimized_image_margins are declared as async but contain no await expressions. This adds overhead without benefit.

🔎 Proposed fix
 # Original implementation for comparison
-async def original_image_margins(
+def original_image_margins(
     image_array: np.ndarray, detect_colour: tuple
 ) -> tuple[int, int, int, int]:
     """Original implementation of the image margins function"""
     nonzero_coords = np.column_stack(np.where(image_array != list(detect_colour)))
     # Calculate the trim box based on the first and last occurrences
     min_y, min_x, _ = np.min(nonzero_coords, axis=0)
     max_y, max_x, _ = np.max(nonzero_coords, axis=0)
     del nonzero_coords
     return min_y, min_x, max_x, max_y

And similarly for optimized_image_margins. Then update the benchmark calls to remove await.


99-101: Unused AutoCrop instance.

The auto_crop instance is created but never used in the benchmark. Either remove it or document why it's needed for setup.

🔎 Proposed fix
         # Create AutoCrop instance
         handler = DummyHandler()
-        auto_crop = AutoCrop(handler)
+        # auto_crop = AutoCrop(handler)  # Not used in margin benchmarks
tests/test_parser_comparison.py (1)

194-210: Redundant directory existence check.

os.path.exists(payload_dir) where payload_dir = "." will always return True since the current directory always exists. This check is effectively dead code.

🔎 Proposed fix
 def main():
     """Main profiling function."""
     payload_dir = "."
     runs = 5  # Number of runs for profiling

-    if not os.path.exists(payload_dir):
-        print(f"Payload directory {payload_dir} doesn't exist.")
-        print("Run your vacuum first to generate payload files.")
-        return
-
     # Find all payload files
     payload_files = [f for f in os.listdir(payload_dir) if f.endswith(".bin")]

     if not payload_files:

Or, if you want to support a configurable directory in the future, parameterize payload_dir.

new_tests/conftest.py (1)

145-151: Consider using scope="session" for the event loop fixture.

The event_loop fixture creates a new loop for each test. If tests are independent, this is fine, but consider scope="session" or scope="module" for better performance if tests can share a loop.
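
A sketch of the session-scoped variant; note that recent pytest-asyncio releases deprecate overriding event_loop in favor of loop-scope configuration, so check the installed version before adopting:

import asyncio

import pytest

@pytest.fixture(scope="session")
def event_loop():
    """One event loop shared by all async tests in the session."""
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()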

tests/test_hypfer_profiling.py (1)

254-257: Unconventional function naming __main__.

The function is named __main__ which is unusual (typically main). While it works, the double underscores suggest a dunder method which this is not. Consider renaming to main().

🔎 Proposed fix
-def __main__():
+def main():
     # Enable comprehensive profiling
     test = TestHypferImageHandler(enable_profiling=True)
     ...

 if __name__ == "__main__":
-    __main__()
+    main()
tests/rand_rooms_test.py (3)

125-197: Dead code: _process_segment_data method is never called.

The _process_segment_data method is defined but never used. The async_extract_room_properties method duplicates its logic instead of calling it. Either remove this method or refactor to use it.

🔎 Proposed fix

Refactor async_extract_room_properties to use _process_segment_data:

         # Process each segment
         if unsorted_id and self.segment_data and self.outlines:
             for idx, segment_id in enumerate(unsorted_id):
-                # Extract points from segment data
-                points = []
-                for x, y, _ in self.segment_data[idx]:
-                    points.append((int(x), int(y)))
-
-                if not points:
-                    continue
-
-                # Use convex hull to get the outline
-                ... (duplicate logic)
-
-                room_properties[room_id] = room_data
+                room_id, room_data = await self._process_segment_data(
+                    self.segment_data[idx], segment_id, pixel_size
+                )
+                if room_id and room_data:
+                    # Update room name from destinations if available
+                    if segment_id in room_id_to_data:
+                        room_info = room_id_to_data[segment_id]
+                        room_data["name"] = room_info.get("name", room_data["name"])
+                    room_properties[room_id] = room_data

230-278: Variable shadowing: room_data used for different purposes.

room_data is first assigned on line 232 as a list from dest_json.get("rooms", []), then reassigned on line 265 as a dictionary for each room's properties. This shadowing reduces readability.

🔎 Proposed fix
         # Process destinations JSON to get room names
         dest_json = destinations
-        room_data = dest_json.get("rooms", [])
-        room_id_to_data = {room["id"]: room for room in room_data}
+        dest_rooms = dest_json.get("rooms", [])
+        room_id_to_data = {room["id"]: room for room in dest_rooms}

         # Process each segment
         ...

108-123: Broad exception handling masks errors.

Catching all exceptions with except Exception as e in convex_hull_outline could hide bugs. Consider catching specific exceptions like scipy.spatial.QhullError.

🔎 Proposed fix
+from scipy.spatial import ConvexHull, QhullError
+
         try:
             # Calculate the convex hull
             hull = ConvexHull(points_array)
             ...
-        except Exception as e:
+        except (QhullError, ValueError) as e:
             _LOGGER.warning(f"Error calculating convex hull: {e}")
tests/rooms_test.py (2)

252-265: Fragile pattern using "mask_offset" in locals().

Checking for variables in locals() is fragile and error-prone. Consider restructuring to always define mask_offset (as None when not applicable) and check for None instead.

🔎 Proposed fix
+            mask_offset = None  # Initialize before the conditional
             # Optimization: Create a smaller mask for just the room area
             if not pixels:
                 # Skip if no pixels
                 mask = np.zeros((1, 1), dtype=np.uint8)
             else:
                 ...
-                        # Store the offset for later use when converting coordinates back
-                        mask_offset = (min_x, min_y)
+                        mask_offset = (min_x, min_y)
                 else:
                     mask = np.zeros((1, 1), dtype=np.uint8)

             ...

             # Extract contour from the mask
-            if "mask_offset" in locals():
+            if mask_offset is not None:
                 # If we're using a local mask, we need to adjust the coordinates
                 outline = convex_hull_outline(mask)
                 if outline:
                     # Adjust coordinates back to global space
                     offset_x, offset_y = mask_offset
                     outline = [(x + offset_x, y + offset_y) for (x, y) in outline]
-                    # Clear the mask_offset variable for the next iteration
-                    del mask_offset
+                    mask_offset = None  # Reset for next iteration

274-277: Redundant list comprehension.

Line 276 creates a new list with the same content: [(x, y) for (x, y) in outline]. This is a no-op and can be removed.

🔎 Proposed fix
-            # Use coordinates as-is without flipping Y coordinates
-            # This prevents the large Y values caused by height - 1 - y transformation
-            outline = [(x, y) for (x, y) in outline]
+            # Note: Using coordinates as-is without flipping Y coordinates
+            # This prevents the large Y values caused by height - 1 - y transformation
backups/hypfer_rooms_handler.py (1)

314-349: Method is marked async but contains no await expressions.

get_room_at_position doesn't perform any async operations. This adds unnecessary overhead from creating coroutines and requires callers to await unnecessarily. Consider making it a regular synchronous method.

🔎 Suggested fix
-    async def get_room_at_position(
+    def get_room_at_position(
         self, x: int, y: int, room_properties: Optional[RoomsProperties] = None
     ) -> Optional[Dict[str, Any]]:
backups/hypfer_handler_ori.py (1)

444-456: Inconsistent element enable/disable implementation.

enable_element directly calls self.drawing_config.enable_element() with extra logging, while disable_element uses the manage_drawable_elements helper. This inconsistency could lead to different behavior and makes the code harder to maintain.

🔎 Suggested fix
     def enable_element(self, element_code: DrawableElement) -> None:
         """Enable drawing of a specific element."""
-        self.drawing_config.enable_element(element_code)
-        LOGGER.info(
-            "%s: Enabled element %s, now enabled: %s",
-            self.file_name,
-            element_code.name,
-            self.drawing_config.is_enabled(element_code),
-        )
+        manage_drawable_elements(self, "enable", element_code=element_code)
 
     def disable_element(self, element_code: DrawableElement) -> None:
         """Disable drawing of a specific element."""
         manage_drawable_elements(self, "disable", element_code=element_code)
backups/test_old_pars.py (2)

53-107: Recursive parsing may cause stack overflow on large maps.

parse_block uses tail recursion to process blocks sequentially. For maps with many blocks, this could exceed Python's default recursion limit (~1000). Consider converting to an iterative approach.

🔎 Suggested iterative approach
     @staticmethod
     def parse_block(
         buf: bytes,
         offset: int,
         result: Optional[Dict[int, Any]] = None,
         pixels: bool = False,
     ) -> Dict[int, Any]:
         """Parse a block of data from the map data."""
         result = result or {}
-        if len(buf) <= offset:
-            return result
-
-        type_ = struct.unpack("<H", buf[offset : offset + 2])[0]
-        hlength = struct.unpack("<H", buf[offset + 2 : offset + 4])[0]
-        length = struct.unpack("<I", buf[offset + 4 : offset + 8])[0]
-        # ... block parsing logic ...
-        return RRMapParser.parse_block(buf, offset + length + hlength, result, pixels)
+        while offset < len(buf):
+            type_ = struct.unpack("<H", buf[offset : offset + 2])[0]
+            hlength = struct.unpack("<H", buf[offset + 2 : offset + 4])[0]
+            length = struct.unpack("<I", buf[offset + 4 : offset + 8])[0]
+            # ... block parsing logic (same as current) ...
+            offset = offset + length + hlength
+        return result

323-334: Unreachable exception handler - dict.get() never raises KeyError.

The try/except KeyError block is dead code since dict.get() returns None (or default) instead of raising KeyError when a key is missing.

🔎 Suggested fix
     @staticmethod
     def get_rrm_goto_target(json_data: JsonType) -> list or None:
         """Get the goto target from the json."""
-        try:
-            path_data = json_data.get("goto_target", {})
-        except KeyError:
-            return None
-
-        if path_data and path_data != []:
+        path_data = json_data.get("goto_target")
+        if path_data:
             path_data = RandImageData.rrm_coordinates_to_valetudo(path_data)
             return path_data
         return None

The same issue exists in get_rrm_virtual_walls (Lines 401-409).
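
A sketch of the same cleanup applied to get_rrm_virtual_walls, assuming its body mirrors the goto-target accessor (the conversion call below is illustrative; keep whatever conversion the method already performs):

    @staticmethod
    def get_rrm_virtual_walls(json_data: JsonType) -> list or None:
        """Get the virtual walls from the json."""
        # dict.get() returns None for a missing key; there is no KeyError to catch
        virtual_walls = json_data.get("virtual_walls")
        if virtual_walls:
            return RandImageData.rrm_coordinates_to_valetudo(virtual_walls)
        return None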

backups/map_data_ori.py (1)

40-43: Unnecessary try/except - dict.get() doesn't raise KeyError.

🔎 Suggested fix
     @staticmethod
     def get_obstacles(entity_dict: dict) -> list:
         """Get the obstacles positions from the entity data."""
-        try:
-            obstacle_data = entity_dict.get("obstacle")
-        except KeyError:
-            return []
+        obstacle_data = entity_dict.get("obstacle")
         obstacle_positions = []
backups/drawable.ori (1)

906-908: Potential AttributeError when path_font is None.

Line 908 calls path_font.endswith("VT.ttf"). An empty string is harmless here (it fails the if path_font: check on Line 892, so user_font already equals default_font and endswith simply returns False), but a None value raises AttributeError. The logic is also convoluted: if path_font is provided but fails to load, user_font equals default_font, making the endswith check pointless in that case.

🔎 Suggested fix for clarity
             if "\u2211" in text or "\u03de" in text:
                 font = default_font
                 width = None
             else:
                 font = user_font
-                width = 2 if path_font.endswith("VT.ttf") else None
+                width = 2 if path_font and path_font.endswith("VT.ttf") else None

Comment on lines +1 to +29
{
  "branchPrefix": "dev/",
  "dryRun": "full",
  "username": "renovate-release",
  "gitAuthor": "Renovate Bot <bot@renovateapp.com>",
  "onboarding": false,
  "platform": "github",
  "includeForks": true,
  "repositories": [
    "renovatebot/github-action",
    "renovate-tests/cocoapods1",
    "renovate-tests/gomod1"
  ],
  "packageRules": [
    {
      "description": "lockFileMaintenance",
      "matchUpdateTypes": [
        "pin",
        "digest",
        "patch",
        "minor",
        "major",
        "lockFileMaintenance"
      ],
      "dependencyDashboardApproval": false,
      "stabilityDays": 0
    }
  ]
}

⚠️ Potential issue | 🟠 Major

Renovate configuration appears to be a template, not customized for this repository.

This configuration has several issues that suggest it's a generic template rather than properly configured:

  1. Location: Renovate configs are typically placed at the repository root as renovate.json or in .github/renovate.json, not in .github/workflows/.

  2. Repository list (lines 9-13): Contains generic Renovate test repositories (renovatebot/github-action, renovate-tests/cocoapods1, renovate-tests/gomod1) instead of this repository's identifier.

  3. Dry-run mode (line 3): "dryRun": "full" means Renovate will only simulate updates without creating actual PRs. This is typically used for testing, not production.

Either remove this file if Renovate isn't needed, or properly configure it with:

  • Correct repository identifier
  • Appropriate dry-run setting (usually false or omit for production use)
  • Correct file location
🤖 Prompt for AI Agents
.github/workflows/renovate.json lines 1-29: this Renovate config looks like a
generic template placed in the wrong location, lists unrelated test
repositories, and is set to full dry-run; either delete it if you don't intend
to run Renovate, or move it to the repo root (renovate.json) or
.github/renovate.json and update the contents: remove or replace the
"repositories" list with the current repository identifier, set "dryRun" to
false or remove the key for production operation, and adjust any other fields
(e.g., branchPrefix, includeForks, packageRules) to repository-specific values
before committing.
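
For illustration, a minimal repo-root renovate.json along those lines (a sketch only; the repository identifier below is this repository's, and dryRun is omitted so Renovate runs normally):

{
  "branchPrefix": "dev/",
  "onboarding": false,
  "platform": "github",
  "repositories": [
    "sca075/Python-package-valetudo-map-parser"
  ]
}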


        # Avoid division by zero
        if out_alpha < 0.0001:
            return Color[0, 0, 0, 0]  # Fully transparent result

⚠️ Potential issue | 🔴 Critical

Bug: Using brackets instead of parentheses for tuple.

Line 440 uses Color[0, 0, 0, 0], which subscripts the Color type instead of building a tuple, so the function fails or returns the wrong object at runtime. Should be (0, 0, 0, 0) to create a tuple.

🔎 Suggested fix
         # Avoid division by zero
         if out_alpha < 0.0001:
-            return Color[0, 0, 0, 0]  # Fully transparent result
+            return (0, 0, 0, 0)  # Fully transparent result
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
        # Avoid division by zero
        if out_alpha < 0.0001:
-            return Color[0, 0, 0, 0]  # Fully transparent result
+            return (0, 0, 0, 0)  # Fully transparent result
🤖 Prompt for AI Agents
In backups/colors.py around line 440, the code subscripts the Color type with
"Color[0, 0, 0, 0]", which does not build a tuple; replace it with a proper tuple
literal "(0, 0, 0, 0)" so the function returns a fully transparent RGBA tuple
instead of indexing Color.

Comment on lines +66 to +99
        for x, y, z in pixels:
            col = x * pixel_size
            row = y * pixel_size
            # Draw pixels as blocks
            for i in range(z):
                # Get the region to update
                region_slice = (
                    slice(row, row + pixel_size),
                    slice(col + i * pixel_size, col + (i + 1) * pixel_size),
                )

                if need_blending:
                    # Sample the center of the region for blending
                    center_y = row + pixel_size // 2
                    center_x = col + i * pixel_size + pixel_size // 2

                    # Only blend if coordinates are valid
                    if (
                        0 <= center_y < image_array.shape[0]
                        and 0 <= center_x < image_array.shape[1]
                    ):
                        # Get blended color
                        blended_color = ColorsManagement.sample_and_blend_color(
                            image_array, center_x, center_y, full_color
                        )
                        # Apply blended color to the region
                        image_array[region_slice] = blended_color
                    else:
                        # Use original color if out of bounds
                        image_array[region_slice] = full_color
                else:
                    # No blending needed, use direct assignment
                    image_array[region_slice] = full_color


⚠️ Potential issue | 🟠 Major

Missing bounds check before region assignment.

The loop draws pixel blocks using region_slice but doesn't validate that row and col are within the image bounds before assignment. NumPy slice assignment doesn't raise here: a negative start wraps the slice to the far edge of the array and an overlarge stop silently clips, so stray coordinates in pixels make the assignments on lines 92, 95, and 98 paint the wrong region or drop pixels silently.

🔎 Suggested fix
         for x, y, z in pixels:
             col = x * pixel_size
             row = y * pixel_size
+            # Skip pixels outside image bounds
+            if row < 0 or row >= image_array.shape[0] or col < 0:
+                continue
             # Draw pixels as blocks
             for i in range(z):
+                end_col = col + (i + 1) * pixel_size
+                if end_col > image_array.shape[1]:
+                    continue
                 # Get the region to update
                 region_slice = (
                     slice(row, row + pixel_size),
                     slice(col + i * pixel_size, col + (i + 1) * pixel_size),
                 )
🤖 Prompt for AI Agents
In backups/drawable_ori.py around lines 66 to 99, the code assigns to
region_slice without verifying the slice lies inside image_array bounds; fix by
clamping the region coordinates before assignment: compute top=row,
left=col+i*pixel_size, bottom=top+pixel_size, right=left+pixel_size, then clamp
top/bottom to [0, image_array.shape[0]] and left/right to [0,
image_array.shape[1]] to produce valid slices; if after clamping bottom<=top or
right<=left skip the assignment entirely; keep the existing center sampling
branch but ensure sampling only happens when the center is inside the clamped
region (or skip blending if not).

Comment on lines +346 to +415
    def _filled_circle(
        image: NumpyArray,
        center: Point,
        radius: int,
        color: Color,
        outline_color: Color = None,
        outline_width: int = 0,
    ) -> NumpyArray:
        """
        Draw a filled circle on the image using NumPy.
        Optimized to only process the bounding box of the circle.
        """
        y, x = center
        height, width = image.shape[:2]

        # Calculate the bounding box of the circle
        min_y = max(0, y - radius - outline_width)
        max_y = min(height, y + radius + outline_width + 1)
        min_x = max(0, x - radius - outline_width)
        max_x = min(width, x + radius + outline_width + 1)

        # Create coordinate arrays for the bounding box
        y_indices, x_indices = np.ogrid[min_y:max_y, min_x:max_x]

        # Calculate distances from center
        dist_sq = (y_indices - y) ** 2 + (x_indices - x) ** 2

        # Create masks for the circle and outline
        circle_mask = dist_sq <= radius**2

        # Apply the fill color
        image[min_y:max_y, min_x:max_x][circle_mask] = color

        # Draw the outline if needed
        if outline_width > 0 and outline_color is not None:
            outer_mask = dist_sq <= (radius + outline_width) ** 2
            outline_mask = outer_mask & ~circle_mask
            image[min_y:max_y, min_x:max_x][outline_mask] = outline_color

        return image

    @staticmethod
    def _filled_circle_optimized(
        image: np.ndarray,
        center: Tuple[int, int],
        radius: int,
        color: Color,
        outline_color: Color = None,
        outline_width: int = 0,
    ) -> np.ndarray:
        """
        Optimized _filled_circle ensuring dtype compatibility with uint8.
        """
        x, y = center
        h, w = image.shape[:2]
        color_np = np.array(color, dtype=image.dtype)
        outline_color_np = (
            np.array(outline_color, dtype=image.dtype)
            if outline_color is not None
            else None
        )
        y_indices, x_indices = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        dist_sq = (y_indices - y) ** 2 + (x_indices - x) ** 2
        circle_mask = dist_sq <= radius**2
        image[circle_mask] = color_np
        if outline_width > 0 and outline_color_np is not None:
            outer_mask = dist_sq <= (radius + outline_width) ** 2
            outline_mask = outer_mask & ~circle_mask
            image[outline_mask] = outline_color_np
        return image

⚠️ Potential issue | 🟠 Major

Inconsistent center parameter convention between _filled_circle and _filled_circle_optimized.

_filled_circle (line 358) unpacks center as y, x = center, while _filled_circle_optimized (line 399) unpacks as x, y = center. This inconsistency is confusing and error-prone for callers.

🔎 Suggested fix - standardize to (x, y) convention
     @staticmethod
     def _filled_circle(
         image: NumpyArray,
         center: Point,
         radius: int,
         color: Color,
         outline_color: Color = None,
         outline_width: int = 0,
     ) -> NumpyArray:
         """
         Draw a filled circle on the image using NumPy.
         Optimized to only process the bounding box of the circle.
         """
-        y, x = center
+        x, y = center
         height, width = image.shape[:2]

Note: This will require updating all call sites (e.g., lines 597, 622-623, 630-631) to pass (x, y) instead of (y, x).

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In backups/drawable_ori.py around lines 346 to 415, the two circle helpers use
different center conventions (line ~358 uses y,x while line ~399 uses x,y);
standardize both to the (x, y) convention: change _filled_circle to unpack
center as x, y = center, update all internal uses so min_x/max_x and min_y/max_y
and distance calculations use x and y consistently (swap the current y/x
usages), update the docstring to state center is (x, y), and then update all
call sites (e.g., lines ~597, 622-623, 630-631) to pass (x, y) if they currently
pass (y, x).

Comment on lines +639 to +656
    @staticmethod
    def overlay_robot(
        background_image: NumpyArray, robot_image: NumpyArray, x: int, y: int
    ) -> NumpyArray:
        """
        Overlay the robot image on the background image at the specified coordinates.
        """
        robot_height, robot_width, _ = robot_image.shape
        robot_center_x = robot_width // 2
        robot_center_y = robot_height // 2
        top_left_x = x - robot_center_x
        top_left_y = y - robot_center_y
        bottom_right_x = top_left_x + robot_width
        bottom_right_y = top_left_y + robot_height
        background_image[top_left_y:bottom_right_y, top_left_x:bottom_right_x] = (
            robot_image
        )
        return background_image

⚠️ Potential issue | 🟠 Major

Missing bounds check in overlay_robot breaks near image edges.

The method directly assigns robot_image to a slice of background_image without validating that the calculated coordinates are within bounds. If the robot is positioned near an image edge, top_left_x or top_left_y can go negative (NumPy then wraps the slice to the far side) and bottom_right_x or bottom_right_y can exceed the image dimensions (the slice silently clips); in both cases the target slice no longer matches robot_image's shape and the assignment raises ValueError.

🔎 Suggested fix
     @staticmethod
     def overlay_robot(
         background_image: NumpyArray, robot_image: NumpyArray, x: int, y: int
     ) -> NumpyArray:
         """
         Overlay the robot image on the background image at the specified coordinates.
         """
         robot_height, robot_width, _ = robot_image.shape
+        bg_height, bg_width = background_image.shape[:2]
         robot_center_x = robot_width // 2
         robot_center_y = robot_height // 2
         top_left_x = x - robot_center_x
         top_left_y = y - robot_center_y
         bottom_right_x = top_left_x + robot_width
         bottom_right_y = top_left_y + robot_height
+        
+        # Clamp to image bounds
+        src_x_start = max(0, -top_left_x)
+        src_y_start = max(0, -top_left_y)
+        dst_x_start = max(0, top_left_x)
+        dst_y_start = max(0, top_left_y)
+        dst_x_end = min(bg_width, bottom_right_x)
+        dst_y_end = min(bg_height, bottom_right_y)
+        
+        if dst_x_start >= dst_x_end or dst_y_start >= dst_y_end:
+            return background_image  # Robot completely outside bounds
+        
+        src_x_end = src_x_start + (dst_x_end - dst_x_start)
+        src_y_end = src_y_start + (dst_y_end - dst_y_start)
+        
-        background_image[top_left_y:bottom_right_y, top_left_x:bottom_right_x] = (
-            robot_image
-        )
+        background_image[dst_y_start:dst_y_end, dst_x_start:dst_x_end] = (
+            robot_image[src_y_start:src_y_end, src_x_start:src_x_end]
+        )
         return background_image
🤖 Prompt for AI Agents
In backups/drawable_ori.py around lines 639 to 656, overlay_robot currently
computes top-left and bottom-right coordinates and assigns robot_image into
background without bounds checks which can raise IndexError when the robot is
near edges; fix by clamping top_left_x/top_left_y to at least 0 and
bottom_right_x/bottom_right_y to at most background width/height, compute the
overlapping width/height, and if overlap is non-positive return the unmodified
background; then copy only the overlapping slice of robot_image into the
corresponding background slice (accounting for channel dims) so no out-of-bounds
indexing occurs.

Comment on lines +76 to +80
                if layer_type == "segment":
                    active_list.append(int(active_type["active"]))

            for value in json_obj.items():
                ImageData.find_layers(value, layer_dict, active_list)

⚠️ Potential issue | 🟠 Major

Potential KeyError when accessing active_type["active"].

Line 77 accesses active_type["active"] without checking if the key exists. If metaData doesn't contain an "active" key, this will raise a KeyError.

🔎 Suggested fix
                 if layer_type == "segment":
-                    active_list.append(int(active_type["active"]))
+                    if active_type and "active" in active_type:
+                        active_list.append(int(active_type["active"]))
 
-            for value in json_obj.items():
+            for key, value in json_obj.items():
                 ImageData.find_layers(value, layer_dict, active_list)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-                if layer_type == "segment":
-                    active_list.append(int(active_type["active"]))
-            for value in json_obj.items():
-                ImageData.find_layers(value, layer_dict, active_list)
+                if layer_type == "segment":
+                    if active_type and "active" in active_type:
+                        active_list.append(int(active_type["active"]))
+            for key, value in json_obj.items():
+                ImageData.find_layers(value, layer_dict, active_list)
🤖 Prompt for AI Agents
In backups/map_data_ori.py around lines 76 to 80, the code accesses
active_type["active"] directly which can raise a KeyError if the "active" key is
missing; update the logic to guard access (e.g., check "active" in active_type
or use active_type.get("active") and verify it's not None) before converting to
int and appending, or wrap the conversion in a try/except (ValueError/TypeError)
to skip invalid/missing values and optionally log a warning.

Comment on lines +79 to +80
            for value in json_obj.items():
                ImageData.find_layers(value, layer_dict, active_list)

⚠️ Potential issue | 🔴 Critical

Incorrect iteration over dict items.

json_obj.items() returns (key, value) tuples, but the code assigns the whole tuple to value and passes it to the recursive call. This will cause type errors since the function expects a dict or list.

🔎 Suggested fix
-            for value in json_obj.items():
-                ImageData.find_layers(value, layer_dict, active_list)
+            for key, value in json_obj.items():
+                ImageData.find_layers(value, layer_dict, active_list)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-            for value in json_obj.items():
-                ImageData.find_layers(value, layer_dict, active_list)
+            for key, value in json_obj.items():
+                ImageData.find_layers(value, layer_dict, active_list)
🤖 Prompt for AI Agents
In backups/map_data_ori.py around lines 79-80, the loop incorrectly assigns the
(key, value) tuple from json_obj.items() to a single variable and passes it to
ImageData.find_layers; change the iteration to either "for key, value in
json_obj.items(): ImageData.find_layers(value, layer_dict, active_list)" or
simply "for value in json_obj.values(): ImageData.find_layers(value, layer_dict,
active_list)" so the function receives the actual nested dict/list, and keep any
existing type checks to skip non-iterable primitives.

Comment on lines +229 to +247
    # Convert virtual walls
    if "virtual_walls" in rand_data and rand_data["virtual_walls"]:
        for wall in rand_data["virtual_walls"]:
            entities.append({
                "__class": "LineMapEntity",
                "metaData": {},
                "type": "virtual_wall",
                "points": wall
            })

    # Convert forbidden zones
    if "forbidden_zones" in rand_data and rand_data["forbidden_zones"]:
        for zone in rand_data["forbidden_zones"]:
            entities.append({
                "__class": "PolygonMapEntity",
                "metaData": {},
                "type": "no_go_area",
                "points": zone
            })

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find /tmp -name "Python-package-valetudo-map-parser" -type d 2>/dev/null || echo "Repo not found in /tmp, will need to clone or search"

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 67


🏁 Script executed:

#!/bin/bash

# Find tests/convert_rand_to_hypfer.py
fd "convert_rand_to_hypfer.py" 2>/dev/null | head -20

# Also check if repo exists locally
find . -name "convert_rand_to_hypfer.py" 2>/dev/null | head -10

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 150


🏁 Script executed:

#!/bin/bash
# First, understand the repository structure
if [ -d ".git" ]; then
    echo "Found git repository"
    git log --oneline | head -5
    git remote -v
    pwd
else
    echo "Not in a git repository"
    ls -la | head -20
fi

# Try to locate the file
find . -name "convert_rand_to_hypfer.py" -type f 2>/dev/null

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 408


🌐 Web query:

Rand256 Valetudo virtual_walls forbidden_zones coordinates 10 division transformation

💡 Result:

Short answer: Valetudo expects integer map coordinates (virtual walls = pairs of (x,y) start/end; forbidden_zones = four (x,y) corners per zone). If your source coordinates are consistently 10× larger than Valetudo’s, convert by dividing each coordinate by 10 and rounding to an integer before sending to the REST/MQTT endpoints. References and examples:

  • REST API docs: virtual_walls are arrays of two (x,y) points; forbidden_zones are arrays of four (x,y) points. Use /api/forbidden_markers or /api/persistent_data to PUT them. [1].
  • Valetudo mapper / MQTT examples show zone and spot coordinate formats (spot: [x,y]; zone: [x1,y1,x2,y2,repeat]). [2].
  • Valetudo release notes: map now shows coordinates (useful to verify coordinate scale). [3].

If you want, paste a sample coordinate array you have and I’ll convert it (divide-by-10 and round) into the correct Valetudo format.

Sources:
[1] Valetudo REST API (forbidden_markers/persistent_data).
[2] Valetudo MQTT / mapper examples (zone/spot formats).
[3] rand256/valetudo release notes (map coordinates feature).


🌐 Web query:

GitHub sca075 Rand256 coordinates virtual walls scale conversion divide 10

💡 Result:

Short answer: Valetudo/rand256 uses "full‑scale" units (1 full‑scale = 0.1 m). To convert those coordinates to meters divide by 10 (or multiply physical meters by 10 to get full‑scale units). Example: full‑scale x=123 → 12.3 m. (qlabs.quanserdocs.com)

Citations:


🏁 Script executed:

#!/bin/bash
# Read the entire convert_rand_to_hypfer.py file to understand context
wc -l ./tests/convert_rand_to_hypfer.py
echo "---"
cat -n ./tests/convert_rand_to_hypfer.py | head -300

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 11159


🏁 Script executed:

#!/bin/bash
# Look at lines 229-247 specifically and surrounding context
sed -n '200,280p' ./tests/convert_rand_to_hypfer.py | cat -n

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 3182


🏁 Script executed:

#!/bin/bash
# Search for where coordinates are divided by 10 in the file
grep -n "/ 10\|/10\|divide" ./tests/convert_rand_to_hypfer.py
echo "---"
# Also search for robot, charger, path handling
grep -n "robot_x\|charger\|path" ./tests/convert_rand_to_hypfer.py | head -30

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 1478


🏁 Script executed:

#!/bin/bash
# Look for the actual conversion function and understand the data flow
grep -n "def convert\|virtual_walls\|forbidden_zones" ./tests/convert_rand_to_hypfer.py

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 398


🏁 Script executed:

#!/bin/bash
# Check if there are tests that validate the coordinate transformations
grep -n "virtual_walls\|forbidden_zones" ./tests/convert_rand_to_hypfer.py
echo "---"
# Look for any test data or fixtures
ls -la ./tests/ | grep -E "\.json|test_"

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 1402


🏁 Script executed:

#!/bin/bash
# Check if there's a rand.json test file to see data structure
if [ -f "./tests/rand.json" ]; then
    echo "rand.json exists"
    head -50 ./tests/rand.json
else
    echo "No rand.json found"
fi

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 50398


🏁 Script executed:

#!/bin/bash
# Look for any rand256 handling to understand coordinate semantics
find . -name "*rand256*" -type f | head -10
grep -r "virtual_walls\|forbidden_zones" --include="*.py" | grep -v ".pyc" | head -20

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 2162


Divide virtual-wall and forbidden-zone coordinates by 10 for consistency with other entities.

Robot position, charger position, and path coordinates are divided by 10 during conversion (lines 195, 216, 226), but virtual walls (line 236) and forbidden zones (line 246) use raw coordinates without scaling. This inconsistency will cause incorrect positioning on the converted map. Apply the same coordinate transformation: divide each coordinate by 10 using integer division.

🤖 Prompt for AI Agents
In tests/convert_rand_to_hypfer.py around lines 229 to 247, virtual walls and
forbidden zones are appended using raw coordinates while other entities are
scaled by dividing coordinates by 10; update the loop bodies so each point
coordinate for virtual_walls (LineMapEntity points) and for forbidden_zones
(PolygonMapEntity points) is transformed by integer division by 10 (e.g., x//10,
y//10) before appending, preserving the same list/tuple structure as used for
robot/charger/path conversions.
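
🔎 Suggested fix (a sketch; assumes each wall and zone is a flat list of integer coordinates, matching the other conversions in this file)

     # Convert virtual walls
     if "virtual_walls" in rand_data and rand_data["virtual_walls"]:
         for wall in rand_data["virtual_walls"]:
             entities.append({
                 "__class": "LineMapEntity",
                 "metaData": {},
                 "type": "virtual_wall",
-                "points": wall
+                "points": [coord // 10 for coord in wall]
             })
 
     # Convert forbidden zones
     if "forbidden_zones" in rand_data and rand_data["forbidden_zones"]:
         for zone in rand_data["forbidden_zones"]:
             entities.append({
                 "__class": "PolygonMapEntity",
                 "metaData": {},
                 "type": "no_go_area",
-                "points": zone
+                "points": [coord // 10 for coord in zone]
             })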

# Add the SCR directory to Python path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "SCR")))

from valetudo_map_parser.config.rand25_parser import RRMapParser

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if this is a test file for valetudo_map_parser
# and find what parser modules are actually available
find . -name "*parser*.py" -path "*/valetudo*" -o -name "*rand*.py" | head -20

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 339


🏁 Script executed:

#!/bin/bash
# List Python files in config directories related to parsers
find . -type f -name "*.py" | grep -E "(config|parser)" | sort

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 1263


🏁 Script executed:

#!/bin/bash
# Find Python parser modules in the codebase
fd -t f "\.py$" | xargs grep -l "rand.*[Pp]arser" | head -10
# Also look for config subdirectories
fd -t d "config"

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 319


Fix incorrect module name in import statement.

Line 12 imports from valetudo_map_parser.config.rand25_parser, but the actual module is named rand256_parser. Change the import to:

from valetudo_map_parser.config.rand256_parser import RRMapParser
🤖 Prompt for AI Agents
In tests/debug_binary.py around line 12, the import references the wrong module
name (`rand25_parser`) — update the import to use the correct module
`rand256_parser` so it reads (conceptually) from
valetudo_map_parser.config.rand256_parser importing RRMapParser; no other
changes needed.
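
🔎 Suggested fix

-from valetudo_map_parser.config.rand25_parser import RRMapParser
+from valetudo_map_parser.config.rand256_parser import RRMapParser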

Comment on lines +220 to +226
        handler = HypferMapImageHandler(shared)

        # Generate image
        self.image = await handler.get_image_from_json(
            self.test_data, return_webp=False
        )


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check the available methods on HypferMapImageHandler
rg -n "def (async_)?get_image" SCR/valetudo_map_parser/hypfer_handler.py

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 129


🏁 Script executed:

#!/bin/bash
# Check the actual method call in the test file
sed -n '220,230p' tests/test_hypfer_profiling.py | cat -n

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 553


🏁 Script executed:

#!/bin/bash
# Check all get_image related methods in hypfer_handler.py
rg -n "def.*get_image" SCR/valetudo_map_parser/hypfer_handler.py

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 129


🏁 Script executed:

#!/bin/bash
# Search entire codebase for any definitions of get_image_from_json (non-async)
rg -n "def get_image_from_json" SCR/

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 67


🏁 Script executed:

#!/bin/bash
# Check the HypferMapImageHandler class definition and inheritance
rg -A 3 "class HypferMapImageHandler" SCR/valetudo_map_parser/hypfer_handler.py

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 248


🏁 Script executed:

#!/bin/bash
# Check if BaseHandler has get_image_from_json method
rg -n "def.*get_image_from_json" SCR/valetudo_map_parser/

Repository: sca075/Python-package-valetudo-map-parser

Length of output: 171


Change method call from get_image_from_json to async_get_image_from_json.

The test calls a non-existent method. HypferMapImageHandler defines only async_get_image_from_json (line 275 of hypfer_handler.py), not get_image_from_json. Update the method call at line 223.

🤖 Prompt for AI Agents
In tests/test_hypfer_profiling.py around lines 220 to 226, the test calls
handler.get_image_from_json which does not exist; replace that call with the
coroutine name defined in hypfer_handler.py, async_get_image_from_json, and
await it the same way (i.e., call await
handler.async_get_image_from_json(self.test_data, return_webp=False)). Ensure
the method name is updated exactly and the await remains to execute the async
method.
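
🔎 Suggested fix (a sketch of the corrected call, using the coroutine name confirmed above)

-        self.image = await handler.get_image_from_json(
+        self.image = await handler.async_get_image_from_json(
             self.test_data, return_webp=False
         )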

@sca075 sca075 closed this Dec 20, 2025